All of what I explore in the following came to me in sort of
a flash one day, and then I got around to running the numbers to
see where we stand. I finally found that HDTV is 1124 lines of
resolution instead of the present TV standard of 525 lines, and
that was not as easy to pin down as you might think.
The aspect ratio and movie frame rate are correct. I could
not find whether HDTV was going to stick with 60 fields (half
frames) per second as at present or going to change, so I used
the movie rate of 24 frames per second. So if anyone wants to do
their own calculations, those numbers are good.
All the calculations below are rounded to convenient
results. Of course, demons can still have snuck in.
* * * * *
Let's run some numbers to compare HDTV to POV-Ray. HDTV is
1124 lines with a 16:9 aspect ratio (1998x1124 = 2,245,752), call
it 2.25 million pixels. This is also touted as the quality of
35mm movies. I question that, but at least for the new TV screens
I'll take their word for it.
For convenience, take the IRTC medium screen resolution of
800x600, which equals 480,000, roughly half a million pixels.
The ratio of the two is roughly 4.7:1. So up front: whatever we
can do at home, we can do in the new format about 4.7 times
slower.
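To make the arithmetic explicit, here is the comparison as a
couple of lines of Python (the variable names are mine, just for
illustration):

    # HDTV frame: 16:9 aspect ratio at 1124 lines
    hdtv_pixels = 1998 * 1124            # 2,245,752, call it 2.25 million
    # IRTC medium resolution
    irtc_pixels = 800 * 600              # 480,000, roughly half a million
    print(hdtv_pixels / irtc_pixels)     # about 4.7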
Movie quality is 24 frames per second versus TV's 30 fps. So
we need to render 2.25 million x 24 pixels for one second's worth
of movie, or about 54 million pixels.
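Or as the same sort of Python sketch:

    hdtv_pixels = 1998 * 1124            # pixels per HDTV frame
    movie_fps = 24                       # movie rate, not TV's 30
    print(hdtv_pixels * movie_fps)       # about 54 million pixels per second of movie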
However, parallel processing machines are "hobby" projects
all over the world. It would be nice to have 2.25 million
processors in parallel so each could render its own pixel. On a
PII 333MHz (128M, Win98, POV-Ray 3.1) machine, only the most
complex internal-reflection scenes (a "city of glass", for
example) fall below 24 pixels per second. If we had 2.25 million
processors, each could render its assigned pixel 24 times a
second, giving the required 24 frames per second.
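A sketch of that one-processor-per-pixel argument, assuming the
24 pixels per second worst case holds across the frame:

    hdtv_pixels = 1998 * 1124            # pixels per frame
    worst_rate = 24                      # pixels per second per processor (PII 333, Win98)
    processors = hdtv_pixels             # one processor assigned to each pixel
    # total pixels rendered per second, spread over one frame's worth of pixels
    print(processors * worst_rate / hdtv_pixels)   # 24.0 frames per second: real time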
If we assume that is the worst-case average for a broad
range of scenes such as we might find in a movie (noting that
rendered movies have so far stayed very far away from such
complex scenes), we have a baseline for estimating the
productivity of parallel processing.
For example, at the 24 pixels per second baseline, one
second of movie (54 million pixels) is 54M/24, or about 2.25
million, processor-seconds of work. Spread over 64k processors in
parallel, that is (2.25M/64k) roughly 34 seconds per second of
movie, about half a minute per second. A two-hour movie would
require some 69 hours, or about three days, using the 24 pixels
per second baseline of an Intel machine under Win98. This we can
consider a worst case.
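The same figures in Python, still assuming the 24 pixels per
second worst case:

    hdtv_pixels = 1998 * 1124
    movie_fps = 24
    worst_rate = 24                                # pixels per second per processor
    processors = 64 * 1024                         # 65,536
    work = hdtv_pixels * movie_fps / worst_rate    # processor-seconds per movie second
    wall = work / processors                       # about 34 seconds per second of movie
    print(wall, 7200 * wall / 3600)                # and about 69 hours for a two-hour movie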
Without designing the 64k parallel machine we cannot say
what its speed would be like, but at least it should be faster
than under Win98. As a modest estimate of the improvement, note
that POV-Ray under Linux is approximately twice as fast. So we
are talking about a day and a half per two-hour movie, worst
case, any kind of scene. Or rather, we can talk about 17 seconds
per second of screen time.
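Again as a quick Python check, with the assumed 2x Linux speedup
folded in (note that at 24 pixels per second and 24 frames per
second the two 24s cancel, so the wall time is just pixels per
frame over processor count):

    win98_wall = (1998 * 1124) / (64 * 1024)       # about 34 s per movie second
    linux_wall = win98_wall / 2                    # about 17 s per second of screen time
    print(linux_wall, 7200 * linux_wall / 3600)    # and about 34 hours for two hours of movie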
There are applications other than movies. The 30-second
commercial renders in about eight and a half minutes. Ten minutes
of fresh animation for "Babylon 5 1/2: The Revenge of Vir" in
under three hours. And all of this under the worst-case "city of
glass" assumption.
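The same 17 seconds per second applied to those run times:

    linux_wall = 17                        # rendering seconds per second of screen time
    print(30 * linux_wall / 60)            # 30-second commercial: about 8.5 minutes
    print(600 * linux_wall / 3600)         # ten minutes of animation: under 3 hours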
The kinds of scenes rendered in B5 and in the common
rendered movies would, from my POV-Ray experience (that is, my
gut feel only), render 100 to 200 times faster than the
city-of-glass case. Going back to 17 seconds per second, we are
down to roughly a tenth of a second per second, which is faster
than real time.
Clearly, even at the research level now, we are at a
rendering speed where creation time is far greater than rendering
time.
If we talk a more modest 1024 processors, everything slows
by a factor of 64: from roughly a tenth of a second per second up
to about seven seconds per second.
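And the last two estimates, using my gut-feel 100 to 200 times
speedup for ordinary scenes (splitting the difference at 150):

    linux_wall = 17                        # worst-case seconds per second with 64k processors
    typical = linux_wall / 150             # about 0.1 s per second: faster than real time
    print(typical)
    print(typical * (65536 / 1024))        # 1024 processors: about 7 seconds per second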